
Sparse reshape op #7125

Closed · wants to merge 38 commits

Conversation

@codeislife99 (Contributor) commented Dec 17, 2020:

This PR adds support for the sparse reshape op (https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/sparse/reshape) as part of a larger effort to add sparse operator support (see also #7126, #7149).

  1. This op will only be used for TF models; there are no equivalent definitions in other frameworks.
  2. I have been informed that the input shapes to these ops are all static (i.e., the input tensor size is fixed), hence the current static-shape implementation. A sketch of the underlying index computation follows below.
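For reference, the index transformation that tf.sparse.reshape performs on COO indices can be sketched in plain NumPy. This is a hedged illustration of the technique only, not the code in this PR; sparse_reshape_coo is a hypothetical helper name, and the sketch assumes new_shape is fully specified (no -1 inference) and that the values tensor carries over unchanged.

import numpy as np

def sparse_reshape_coo(indices, prev_shape, new_shape):
    # indices: (nnz, len(prev_shape)) array of COO coordinates.
    # Map each N-D coordinate to its flat offset in the old layout...
    flat = np.ravel_multi_index(indices.T, prev_shape)
    # ...then expand that offset into coordinates of the new layout.
    return np.stack(np.unravel_index(flat, new_shape), axis=1)

old = np.array([[0, 0, 0], [0, 1, 1], [1, 1, 1]])  # 3 nonzeros in shape (2, 2, 2)
print(sparse_reshape_coo(old, (2, 2, 2), (4, 2)))
# [[0 0]
#  [1 1]
#  [3 1]]

Because only the index arrays change, sparse_values passes through untouched, which is why the op is cheap relative to a dense reshape.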

@codeislife99 codeislife99 marked this pull request as draft December 17, 2020 21:29
@codeislife99 codeislife99 marked this pull request as ready for review December 18, 2020 00:26
@codeislife99 (Author) commented:

@trevor-m @comaniac @zhiics PTAL

@comaniac (Contributor) left a comment:

LGTM AFAICT. Just minor comments about formatting.
Will wait for @zhiics 's review.

Resolved review threads on: include/tvm/topi/transform.h, python/tvm/relay/op/transform.py (×2), python/tvm/topi/transform.py (×2).
@tkonolige (Contributor) left a comment:

Hi, I've left some comments. In general looks good though. I appreciate the included examples.

By the way, you seem to have accidentally bumped 3rdparty/vta-hw. You should probably reset that to whatever is at head.

@tkonolige (Contributor) left another comment:

Sorry, I somehow lost my previous comments.

Resolved review thread on python/tvm/relay/op/_transform.py.

def sparsereshape(sparse_indices, sparse_values, prev_shape, new_shape):
"""
Reshape a Sparse Tensor
Reviewer (Contributor) commented on this snippet:

Could you note that this function only supports tensors in COO format, not CSR? In other parts of the codebase, we tend to use CSR.

@codeislife99 (Author) replied Dec 22, 2020:

Can you explain how this convention differs from the sparse_to_dense operator? That is the only operator I could find as an example of existing representations.

Reviewer (Contributor) replied:

The convention is the same as sparse_to_dense. However, sparse_dense uses the CSR and BSR formats. We should probably add documentation to sparse_to_dense too.
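To make the format distinction concrete, here is a hedged SciPy sketch of the two layouts (illustrative only, not code from this PR):

import numpy as np
import scipy.sparse as sp

dense = np.array([[0, 2, 0],
                  [3, 0, 4]])

coo = sp.coo_matrix(dense)  # COO: parallel row/col index arrays plus values
print(coo.row, coo.col, coo.data)         # [0 1 1] [1 0 2] [2 3 4]

csr = coo.tocsr()           # CSR: per-row pointers compress the row array
print(csr.indptr, csr.indices, csr.data)  # [0 1 3] [1 0 2] [2 3 4]

TF's SparseTensor (and hence this op and sparse_to_dense) uses the COO-style (indices, values, dense_shape) triple, while sparse_dense consumes CSR/BSR.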

Five resolved review threads on src/relay/op/tensor/transform.cc.
@codeislife99 (Author) commented:

@tkonolige @mbrookhart I have added the TF frontend code. Can I get a re-review of this PR, following up on our discussion last week?

@anijain2305 (Contributor) commented:

Style and organization-wise LGTM. @tkonolige @mbrookhart Can you PTAL?

@mbrookhart (Contributor) commented:

I'm out on leave this week, so I won't be giving it a full review until Monday. @codeislife99 , do you have a branch where you tested the TF model with all three ops?

@codeislife99 (Author) commented:

That's understandable, of course; I can wait for the full review. In the meantime I will create a branch with all three ops and E2E testing.

@mbrookhart (Contributor) left a comment:

I don't think this will work with dynamic input shapes, and from our conversation, my understanding is that you expect a chain of these ops with dynamic slicing in between them in the model you're targeting. I don't want to merge this until we've validated that this actually works.

Otherwise, things look good.

Comment on lines +1407 to +1408
int new_shape_size = GetConstInt(new_shape->shape[0]);
int prev_shape_size = GetConstInt(prev_shape->shape[0]);
@mbrookhart (Contributor) commented on this snippet:

My main complaint is that this will fail with dynamic input shapes. From what I understand, you expect multiple chained dynamically-shaped sparse ops in the model you're trying to target, so I'm hesitant to merge this because I'm under the impression that this will not solve the larger problem you're trying to solve.

I'd really like to see you either test the model in a branch containing all three of your PRs, or write a unit test with a representative subgraph.
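For illustration, a minimal Relay-level sketch of the static-vs-dynamic distinction this comment points at (hypothetical variable names; assumes the relay.Any() dynamic-dimension API):

from tvm import relay

# Static: the length of new_shape is a compile-time constant, so a helper
# like GetConstInt(new_shape->shape[0]) can fold it to an integer.
static_new_shape = relay.var("new_shape", shape=(3,), dtype="int64")

# Dynamic: relay.Any() leaves the dimension unknown until runtime, so
# there is no constant for GetConstInt to extract at compile time.
dynamic_new_shape = relay.var("new_shape", shape=(relay.Any(),), dtype="int64")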

@codeislife99 (Author) commented:

I have a completely new implementation in #7477 that addresses the many issues raised in this PR regarding performance and dynamic shapes. I am closing this PR; please review the new one there.
